25 research outputs found
Neurosymbolic Spike Concept Learner towards Neuromorphic General Intelligence
Current research in concept learning makes use of deep learning and ensemble methods to learn concepts. Concept learning allows us to combine heterogeneous entities in data which can collectively identify individual concepts. Heterogeneity and compositionality are crucial areas to explore in machine learning, as they have the potential to contribute profoundly to artificial general intelligence. We investigate the use of spiking neural networks for concept learning. Spiking neurones inherently model the temporal properties observed in biological neurones. A benefit of spike-based neurones is that they allow localised learning rules that adapt only the connections between relevant neurones. In this position paper, we propose a technique allowing the dynamic formation of synapses (connections) in spiking neural networks, the basis of structural plasticity. Achieving dynamic synapse formation allows for a unique approach to concept learning with a malleable neural structure. We call this technique the Neurosymbolic Spike-Concept Learner (NS-SCL). The limitations of NS-SCL can be overcome with the neuromorphic computing paradigm. Furthermore, introducing NS-SCL as a technique on neuromorphic platforms should motivate a new direction of research towards Neuromorphic General Intelligence (NGI), a term we define to some extent.
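The abstract's central mechanism, dynamic synapse formation, can be illustrated in miniature. The sketch below is not the authors' NS-SCL implementation; it is a hypothetical Hebbian-style toy in which a synapse is created whenever two neurones spike within a coincidence window, so the connection graph grows as inputs co-occur. All names and the window value are illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation): structural
# plasticity in a toy spiking network. A synapse forms between two
# neurones when their spikes fall within a coincidence window, so the
# network's structure grows dynamically as concepts co-occur.
WINDOW = 5.0  # coincidence window in ms (illustrative value)

class SpikingNet:
    def __init__(self, n):
        self.spike_times = {i: [] for i in range(n)}  # neurone -> spike times
        self.synapses = set()                         # (pre, post) pairs

    def record_spike(self, neurone, t):
        self.spike_times[neurone].append(t)
        self._grow_synapses(neurone, t)

    def _grow_synapses(self, post, t):
        # Dynamic synapse formation: connect any neurone that spiked
        # recently to the one that just fired.
        for pre, times in self.spike_times.items():
            if pre == post or (pre, post) in self.synapses:
                continue
            if any(0 < t - s <= WINDOW for s in times):
                self.synapses.add((pre, post))

net = SpikingNet(3)
net.record_spike(0, 1.0)
net.record_spike(1, 3.0)   # within 5 ms of neurone 0 -> synapse (0, 1)
net.record_spike(2, 20.0)  # outside the window -> no synapse forms
```

Localised rules like this touch only the neurones involved in a spike event, which is what makes them a natural fit for event-driven neuromorphic hardware.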
Towards an Ontology-Based Approach to Measuring Productivity for Offsite Manufacturing Method
The steady decline of manual and skilled trades in the construction industry has increased recognition of offsite manufacturing (OSM), an aspect of Design for Manufacture and Assembly (DfMA) methods, as one way to boost productivity and performance. However, existing productivity estimation approaches are carried out in isolation, limiting the results that can be obtained from such systems. There is also, as yet, no holistic approach that enables productivity estimation using different metrics and integrates experts' knowledge to predict productivity and guide decision making at the early development stage of a project. This study aims to develop a method that generates multiple estimates for these metrics simultaneously by linking their relationships. An ontology-based knowledge modelling approach for estimating productivity at the production stage of OSM projects is proposed. A case study of an offsite panel system is used as a proof of concept for data collection and knowledge modelling in an ontology. Through rules and semantic reasoning, the study retrieved cost estimates and time schedules for panel system production under different design choices. This demonstrates that systemising the production process knowledge of OSM methods enables practitioners to make informed choices on product design to best suit productivity requirements. The developed method helps reduce uncertainty by encouraging measurable evidence and allows for better decision-making on productivity.
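The key idea, linking metrics so that one set of facts yields several estimates at once, can be mimicked without an ontology. The sketch below is an illustrative stand-in for the paper's semantic rules, not its actual knowledge model; all figures and property names are hypothetical.

```python
# Illustrative sketch only: the paper uses an ontology with semantic
# rules; here the same idea (linked metrics producing multiple estimates
# simultaneously) is mimicked with plain Python. All values hypothetical.
facts = {
    "panel_count": 40,
    "labour_hours_per_panel": 1.5,   # hypothetical production metric
    "labour_rate_per_hour": 30.0,    # hypothetical cost metric (GBP)
    "material_cost_per_panel": 85.0,
}

def estimate(facts):
    # Rules link the metrics: the time estimate feeds the cost estimate,
    # so both are derived together from the same knowledge base.
    hours = facts["panel_count"] * facts["labour_hours_per_panel"]
    labour_cost = hours * facts["labour_rate_per_hour"]
    material_cost = facts["panel_count"] * facts["material_cost_per_panel"]
    return {"total_hours": hours, "total_cost": labour_cost + material_cost}

print(estimate(facts))  # time and cost estimates from one linked model
```

Changing a single design choice (say, panels requiring less labour) updates every downstream estimate consistently, which is the decision-support benefit the abstract describes.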
Demystifying the concept of offsite manufacturing method: Towards a robust definition and classification system
Purpose
This study aims to develop a more inclusive working definition and a formalised classification system for offsite construction to enable a common basis of evaluation and communication. Offsite manufacturing (OSM) is increasingly recognised as a way to increase efficiency and boost the productivity of the construction industry in many countries. However, knowledge of OSM varies across countries, construction practices and individual experts, resulting in major misconceptions. The lack of consensus on what OSM is and what constitutes its methods creates misunderstanding among Architecture, Engineering and Construction (AEC) industry professionals, inhibiting a global view and understanding for multicultural collaboration. There is therefore a need to revisit these issues with the aim of developing a deep understanding of the concepts and ascertaining what is deemed inclusive or exclusive.
Design/methodology/approach
A state-of-the-art review and analysis of literature on OSM was conducted to observe trends in OSM definitions and classifications. The paper identifies gaps in existing methods and proposes a future direction.
Findings
Findings suggest that classifications are mostly aimed at a particular purpose and that existing classification systems are not robust enough to cover all aspects. There is therefore a need to extend these classification systems to be fit for various purposes.
Originality/value
This paper contributes to the body of literature on offsite concepts, definitions and classification, and provides knowledge on the broader context of the fundamentals of OSM.
ARP cache poisoning mitigation and forensics investigation
Address Resolution Protocol (ARP) cache spoofing, or poisoning, is an OSI layer 2 attack that exploits the statelessness of the protocol to make network hosts susceptible to attacks such as man-in-the-middle, host impersonation, denial of service (DoS) and session hijacking. In this paper, a quantitative research approach is used to propose forensic tools for capturing evidence and mitigating ARP cache poisoning. A baseline approach is adopted to validate the proposed tools: evidence captured before an attack is compared against evidence captured while the network is under attack, in order to ascertain the validity of the proposed tools in capturing ARP cache spoofing evidence. To mitigate the ARP poisoning attack, the security features DHCP Snooping and Dynamic ARP Inspection (DAI) are enabled and configured on a Cisco switch. The experimental results showed the effectiveness of the proposed mitigation technique.
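The baseline comparison the abstract describes reduces to a simple check: any IP address whose MAC binding differs from the pre-attack capture is a candidate poisoning event. The sketch below illustrates that logic only; it is not one of the paper's forensic tools, and all addresses are hypothetical.

```python
# Illustrative sketch (addresses hypothetical): compare IP-to-MAC
# bindings captured before an attack against those captured later;
# any IP whose MAC has changed is flagged as possible ARP poisoning.
def detect_poisoning(baseline, captured):
    """baseline/captured: dicts mapping IP address -> MAC address."""
    alerts = []
    for ip, mac in captured.items():
        if ip in baseline and baseline[ip] != mac:
            alerts.append((ip, baseline[ip], mac))  # (ip, expected, seen)
    return alerts

baseline = {"192.168.1.1": "aa:bb:cc:00:00:01",
            "192.168.1.10": "aa:bb:cc:00:00:02"}
captured = {"192.168.1.1": "de:ad:be:ef:00:99",   # gateway MAC changed
            "192.168.1.10": "aa:bb:cc:00:00:02"}  # unchanged
print(detect_poisoning(baseline, captured))
```

DHCP Snooping and DAI apply the same principle on the switch itself: DAI drops ARP packets whose bindings contradict the trusted table that DHCP Snooping builds.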
Process Models Discovery and Traces Classification: A Fuzzy-BPMN Mining Approach
The discovery of useful process models must be performed with due regard to the transformation to be achieved. The blend of data representations (i.e. data mining) and process modelling methods, often allied to the field of Process Mining (PM), has proven effective in analysing the event logs readily available in many organisations' information systems. Process discovery has lately been seen as the most important and most visible intellectual challenge in process mining. The method involves the automatic construction of process models from event logs of any domain process, describing the causal dependencies between the various activities performed within the process execution environment. In principle, process discovery can be used to obtain process models that describe reality. To this end, this article presents a Fuzzy-BPMN mining approach that uses training event logs representing 10 different real-time business process executions to discover useful process models, and then cross-validates the derived models against a set of test event logs in order to measure the accuracy and performance of the approach. The method carries out a classification task on the traces, i.e. the individual cases that make up the test event logs, to determine which traces can be replayed by the original model. The paper thus aims to provide a technique for process model discovery that balances "overfitting" against "underfitting" while correctly classifying the traces that can be replayed (allowed) or not replayed (disallowed) by the model.
In other words, the study shows, through the Fuzzy-BPMN replaying notation and a series of validation experiments, how, given any classified trace (from the test event logs) and a discovered process model (from the training log), it can be unambiguously determined whether or not the trace can be replayed on the discovered model.
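The replay classification the abstract describes can be reduced to a minimal form: a discovered model constrains which activity may directly follow which, and a trace is replayable iff every consecutive pair it contains is permitted. The sketch below is only that reduction; the paper's Fuzzy-BPMN notation is far richer, and the activity names here are hypothetical.

```python
# Illustrative sketch: trace classification by replay. The discovered
# model is reduced to its allowed direct-succession pairs; a trace is
# "replayable" iff every consecutive pair of activities is permitted.
# (Activity names are hypothetical, not from the paper's event logs.)
MODEL = {("start", "register"), ("register", "check"),
         ("check", "approve"), ("check", "reject"),
         ("approve", "end"), ("reject", "end")}

def replayable(trace, model=MODEL):
    # zip pairs each activity with its successor in the trace.
    return all((a, b) in model for a, b in zip(trace, trace[1:]))

print(replayable(["start", "register", "check", "approve", "end"]))  # allowed
print(replayable(["start", "check", "approve", "end"]))              # disallowed
```

Counting how many test traces each discovered model accepts or rejects is then the cross-validation step: a model that accepts everything is underfitting, one that rejects valid traces is overfitting.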
ifcOWL-DfMA a new ontology for the offsite construction domain
Architecture, Engineering and Construction (AEC) is a fragmented industry dealing with heterogeneous data formats coming from different domains. Building Information Modelling (BIM) is one of the most important efforts to manage information collaboratively within the AEC industry. The Industry Foundation Classes (IFC) can be used as a data format to achieve data exchange between diverse software applications in a BIM process. The advantage of using Semantic Web technologies to overcome these challenges has been recognised by the AEC community, and the ifcOWL ontology, which transforms the IFC schema into a Web Ontology Language (OWL) representation, is now a de facto standard. Even though the ifcOWL ontology is very extensive, it lacks detailed knowledge representation of the processes and sub-processes underpinning Design for Manufacturing and Assembly (DfMA) for offsite construction, as well as knowledge of how product and productivity measures such as production costs and durations are incurred, which is essential for evaluating alternative DfMA design options. In this article we present a new ontology named ifcOWL-DfMA as a domain-specific module for ifcOWL, with the aim of representing offsite construction domain terminology and relationships in a machine-interpretable format. This ontology will play the role of a core vocabulary for DfMA design management and can be used in many scenarios, such as life cycle cost estimation. To demonstrate the usage of the ifcOWL-DfMA ontology, a production line of wall panels is presented. We evaluate our approach by querying the wall panel production model for information such as activity sequence, cost estimation per activity and direct material cost. This ultimately enables users to evaluate the overall product from the system.
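The kind of retrieval described (costs per activity for a wall panel) is, at bottom, a query over subject-predicate-object triples. A real ifcOWL-DfMA model would be queried with SPARQL; the sketch below mimics the same retrieval over a miniature in-memory triple list. The IRIs and terms are hypothetical placeholders, not actual ifcOWL-DfMA vocabulary.

```python
# Illustrative sketch: querying a wall-panel production model held as
# triples. All names below are hypothetical placeholders, not real
# ifcOWL-DfMA terms; a real model would be queried with SPARQL.
triples = [
    ("panel:WP1", "dfma:hasActivity", "act:CutFrame"),
    ("panel:WP1", "dfma:hasActivity", "act:FitInsulation"),
    ("act:CutFrame", "dfma:costGBP", 120.0),
    ("act:FitInsulation", "dfma:costGBP", 45.0),
]

def objects(subject, predicate):
    # All objects of matching (subject, predicate, ?) triples.
    return [o for s, p, o in triples if s == subject and p == predicate]

# Activities for the panel, their costs, and the per-panel total.
activities = objects("panel:WP1", "dfma:hasActivity")
costs = {a: objects(a, "dfma:costGBP")[0] for a in activities}
print(costs, sum(costs.values()))
```

Because the model is declarative, the same triples answer several questions (sequence, per-activity cost, total cost) without bespoke code for each, which is the evaluation scenario the abstract outlines.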
A Survey of Open Data Platforms in Six UK Smart City Initiatives
This paper presents a comparative analysis of the feasibility studies submitted by six UK cities (London, Birmingham, Manchester, Glasgow, Bristol and Milton Keynes), exploring their Open Data resources, the common visions of their smart programmes and the themes of their projects. In this research, we distinguish between stored datasets that are accessible via data hubs, and live data that are only accessible via APIs in real time. The aim of this work is to raise awareness of, and access to, existing data resources, to encourage alignment of data collection and curation among projects with compatible objectives in different cities, and to identify the gaps in coverage that are hampering achievement of the cities' visions. Given that our findings are based purely on stored Open Data, we conclude that the Smart City vision will only be achieved in practice if those involved in Smart City projects co-operate and share both experiences and resources, so as to maximise progress towards the common goal while minimising duplication of effort and repetition of the same mistakes.
Scalable service-oriented replication with flexible consistency guarantee in the cloud
Replication techniques are widely applied in the cloud to improve scalability and availability. In this context, the well-understood problem is how to guarantee consistency among different replicas and govern the trade-off between consistency and scalability requirements. Such requirements are often tied to specific services and can vary considerably in the cloud. However, a major drawback of existing service-oriented replication approaches is that they allow either restricted consistency or none at all. Consequently, service-oriented systems based on such replication techniques may violate consistency requirements or fail to scale well. In this paper, we present Scalable Service-Oriented Replication (SSOR), a middleware capable of satisfying applications' consistency requirements when replicating cloud-based services. We introduce a new formalism for describing services in service-oriented replication. We propose the notion of consistency regions and the relevant service-oriented requirements policies, by which the trade-off between consistency and scalability requirements can be handled within regions. We solve the associated sub-problem of atomic broadcast by introducing a Multi-fixed Sequencers Protocol (MSP), a requirements-aware variation of the traditional fixed-sequencer approach. We also present a Region-based Election Protocol (REP) that elastically balances the workload among sequencers. Finally, we experimentally evaluate our approach under different loads, showing that it achieves better scalability with more flexible consistency constraints than the state-of-the-art replication technique.
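The multi-fixed-sequencer idea can be illustrated with a toy: each consistency region has its own sequencer that totally orders requests within the region, so ordering (and its coordination cost) is paid only where a service's policy demands it. This is a hypothetical sketch of the concept, not the SSOR/MSP protocol itself; region names and requests are invented.

```python
# Illustrative sketch of the multi-fixed-sequencer idea (not the paper's
# MSP protocol). Each consistency region has its own fixed sequencer
# that totally orders requests within that region only; regions proceed
# independently, which is where the scalability comes from.
from itertools import count

class Region:
    def __init__(self, name):
        self.name = name
        self._seq = count(1)  # this region's fixed sequencer

    def order(self, request):
        # Assign the next sequence number within this region.
        return (self.name, next(self._seq), request)

payments = Region("payments")    # service needing strong ordering
catalogue = Region("catalogue")  # ordered separately, no cross-region wait

log = [payments.order("debit A"),
       catalogue.order("update price"),
       payments.order("credit B")]
print(log)
```

Note that "debit A" and "credit B" are totally ordered with respect to each other, while the catalogue update carries its own independent sequence, so no global sequencer becomes a bottleneck.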
Integration operators for generating RDF/OWL-based user defined mediator views in a grid environment
Research and development activities relating to the grid have generally focused on applications where data is stored in files. However, many scientific and commercial applications are highly dependent on Information Servers (ISs) for the storage and organization of their data. A data-information system that supports operations on multiple information servers in a grid environment is referred to as an interoperable grid system. Different perceptions by end-users of interoperable systems in a grid environment may lead to different reasons for integrating data. Even the same user might want to integrate the same distributed data in various ways to suit different needs, roles or tasks. Multiple mediator views are therefore needed to support this diversity. This paper describes our approach to supporting semantic interoperability in a heterogeneous multi-information-server grid environment. It is based on using integration operators to generate multiple, semantically rich RDF/OWL-based user-defined mediator views above the participating grid ISs. These views support different perceptions of the distributed and heterogeneous data available. A set of grid services is developed for the implementation of the mediator views.
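An integration operator, in the simplest case, reconciles two sources' local schemas into one user-defined view. The paper's views are RDF/OWL-based; the sketch below uses plain dicts as a stand-in for a union-style operator, and all field names are hypothetical.

```python
# Illustrative sketch: a "union" integration operator producing one
# mediator view over two heterogeneous sources. The paper's real views
# are RDF/OWL-based; plain dicts stand in here, field names hypothetical.
source_a = [{"id": 1, "fullname": "Ada"}]   # one IS, one local schema
source_b = [{"pid": 2, "name": "Grace"}]    # another IS, another schema

def union_view(a, b):
    # The operator maps each local schema into the shared view schema.
    view = [{"person": r["id"], "label": r["fullname"]} for r in a]
    view += [{"person": r["pid"], "label": r["name"]} for r in b]
    return view

print(union_view(source_a, source_b))
```

Different users could apply different operators (join, filter, restructure) over the same sources to materialise views matching their own perception of the data.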
A service-based system for malnutrition prevention and self-management
Malnutrition is considered one of the root causes of other diseases. It is particularly common in the ageing population, where it requires more efficient handling and management to enable longer independent living at home. Achieving this, however, requires overcoming a number of related challenges, especially those concerning the management of health and disease, as well as social and logistical barriers. This paper presents the design of a distributed system that enables homecare management in the context of self-feeding and malnutrition prevention through balanced nutritional intake. The design employs a service-based system that incorporates a number of services, including monitoring of activities, nutritional reasoning for assessing feeding habits, diet recommendation for food planning, and marketplace invocation for automating food shopping to meet dietary requirements. The solution is deployed in a small pilot in 12 older adults' homes and, in early results, demonstrates a holistic, user-centred and scalable approach to malnutrition self-management.
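The nutritional-reasoning service can be pictured as a rule that compares daily intake against a per-user requirement and reports any shortfall to the recommendation service. The sketch below is an illustration of that flow only; the thresholds and names are hypothetical, not taken from the deployed system.

```python
# Illustrative sketch of the nutritional-reasoning step (thresholds and
# names hypothetical, not from the deployed system): compare a day's
# intake to the user's targets and report shortfalls downstream.
REQUIREMENTS = {"kcal": 2000, "protein_g": 60}  # illustrative targets

def assess(intake, requirements=REQUIREMENTS):
    # Any nutrient below target is returned with the amount missing.
    return {k: requirements[k] - intake.get(k, 0)
            for k in requirements if intake.get(k, 0) < requirements[k]}

day = {"kcal": 1500, "protein_g": 62}
print(assess(day))  # calorie shortfall; protein target met
```

In the service-based design the abstract describes, this output would feed the diet-recommendation service for food planning and, ultimately, the marketplace service for automated shopping.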